High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, most available simulators are incapable of replicating traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. Taking a step toward addressing this problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators to provide high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is developed with fidelity, diversity, and controllability in consideration, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and provide traffic generation models from real-world datasets, while RITAKit is developed with easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings demonstrate that our produced RITA traffic flows meet all three design goals, thus enhancing the completeness of driving strategy evaluation. Moreover, we showcase the potential for further improving baseline strategies through online fine-tuning with RITA traffic flows.
Neural volumetric representations have shown that an MLP network can be trained with multi-view calibrated images to represent scene geometry and appearance, without explicit 3D supervision. Object segmentation can enrich many downstream applications based on the learned radiance field. However, introducing hand-crafted segmentation to define regions of interest in complex real-world scenes is non-trivial and expensive, as it requires per-view annotations. This paper explores self-supervised learning of object segmentation with NeRF for complex real-world scenes. Our framework, NeRF-SOS, couples object segmentation and neural radiance fields to segment objects from any view within a scene. By proposing a novel collaborative contrastive loss at both the appearance and geometry levels, NeRF-SOS encourages the NeRF model to distill compact geometry-aware segmentation clusters from its density field together with self-supervised pre-trained 2D visual features. The self-supervised object segmentation framework can be applied to various NeRF models, yielding both photo-realistic rendering results and convincing segmentations for indoor and outdoor scenes. Extensive results on the LLFF and Tanks and Temples datasets validate the effectiveness of NeRF-SOS. It consistently surpasses other image-based self-supervised baselines and even captures finer details than the supervised Semantic-NeRF.
Representing visual signals with implicit representations (e.g., coordinate-based deep networks) has prevailed in many vision tasks. This work explores a new intriguing direction: training stylized implicit representations using a generalized approach that is applicable to various 2D and 3D scenes. We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representations, neural radiance fields, and signed distance functions. Our solution is a unified Implicit Neural Stylization framework, dubbed INS. In contrast to vanilla implicit representations, INS decomposes the ordinary implicit function into a style implicit module and a content implicit module, in order to separately encode representations from the style image and the input scene. An amalgamation module is then applied to aggregate this information and synthesize the stylized output. To regularize the geometry in 3D scenes, we propose a novel self-distillation geometry consistency loss, which preserves the geometric fidelity of the stylized scenes. Comprehensive experiments are conducted on multiple task settings, including novel view synthesis of complex scenes, stylization of implicit surfaces, and image fitting with MLPs. We further demonstrate that the learned representations are continuous not only spatially but also style-wise, leading to effortless interpolation between different styles and the generation of images with new mixed styles. Please refer to the videos on our project page for more view synthesis results: https://zhiwenfan.github.io/ins.
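The style/content decomposition described above can be illustrated with a toy sketch (hypothetical, not the authors' code): a content module maps coordinates to features, a style module maps a style code to features, and an amalgamation module fuses them into the stylized output. Real INS learns these modules as MLPs; here they are fixed random linear maps that only show the data flow.

```python
# Illustrative sketch of the INS decomposition. All module shapes and the
# linear maps are invented for this example; INS itself uses trained MLPs.
import numpy as np

rng = np.random.default_rng(0)
W_content = rng.normal(size=(8, 2))   # content module: (x, y) coord -> 8-d feature
W_style = rng.normal(size=(8, 4))     # style module: 4-d style code -> 8-d feature
W_merge = rng.normal(size=(3, 16))    # amalgamation module: fused feature -> RGB

def stylize(coord, style_code):
    """Fuse content and style features into a 'stylized' RGB value."""
    content_feat = np.tanh(W_content @ coord)
    style_feat = np.tanh(W_style @ style_code)
    fused = np.concatenate([content_feat, style_feat])
    return W_merge @ fused

# Same coordinate rendered under two different style codes gives
# different outputs, while the content pathway is shared.
rgb_a = stylize(np.array([0.5, 0.5]), np.array([1.0, 0.0, 0.0, 0.0]))
rgb_b = stylize(np.array([0.5, 0.5]), np.array([0.0, 1.0, 0.0, 0.0]))
```

Because the style code enters through its own module, interpolating between two style codes interpolates the output continuously, which is the property the abstract highlights.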
Efficient video architectures are key to deploying video recognition systems on devices with limited computing resources. Unfortunately, existing video architectures are often computationally intensive and not suitable for such applications. The recent X3D work presents a family of efficient video models by expanding a hand-crafted image architecture along multiple axes, such as space, time, width, and depth. Although operating in a conceptually large space, X3D searches one axis at a time and explores only a total of 30 architectures, which is insufficient to explore the space. This paper bypasses existing 2D architectures and directly searches for 3D architectures in a fine-grained space, where block types, filter numbers, expansion ratios, and attention blocks are jointly searched. A probabilistic neural architecture search method is adopted to efficiently search such a large space. Evaluations on the Kinetics and Something-Something-V2 benchmarks confirm that our AutoX3D models outperform existing ones in accuracy by up to 1.3% under similar FLOPs, and reduce the computation cost by up to x1.74 when reaching similar performance.
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS), with high performance, low cost, and in-depth interpretation. NAS has been explosively studied to automate the discovery of top-performer neural networks, but suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations. Recent NAS works start to explore indicators that can predict a network's performance without training. However, they either leveraged limited properties of deep networks, or the benefits of their training-free indicators are not applied to more extensive search methods. By rigorous correlation analysis, we present a unified framework to understand and accelerate NAS, by disentangling "TEG" characteristics of searched networks - Trainability, Expressivity, Generalization - all assessed in a training-free manner. The TEG indicators could be scaled up and integrated with various NAS search methods, including both supernet and single-path approaches. Extensive studies validate the effective and efficient guidance from our TEG-NAS framework, leading to both improved search accuracy and over 56% reduction in search time cost. Moreover, we visualize search trajectories on three landscapes of "TEG" characteristics, observing that while a good local minimum is easier to find on NAS-Bench-201 given its simple topology, balancing "TEG" characteristics is much harder on the DARTS search space due to its complex landscape geometry. Our code is available at https://github.com/VITA-Group/TEGNAS.
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning using ground truth data, we propose to regularize the unpaired training using the information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and the attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods under a variety of metrics in terms of visual quality and subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. Our codes and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
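The "information extracted from the input itself" can be made concrete with a small sketch. EnlightenGAN derives an attention map from the input's own illumination so that darker regions are pushed toward stronger enhancement; the version below is a toy stand-in (the actual map feeds a learned generator), but it shows how a self-derived signal replaces ground-truth supervision.

```python
# Hedged sketch: a self-derived attention map for low-light enhancement.
# The real EnlightenGAN pipeline is learned end-to-end; this only
# illustrates the idea of computing the attention from the input itself.
import numpy as np

def illumination_attention(rgb):
    """rgb: (h, w, 3) floats in [0, 1]. Returns an (h, w) attention map
    that is close to 1 where the image is dark and 0 where it is bright."""
    illum = rgb.max(axis=2)   # rough per-pixel illumination estimate
    return 1.0 - illum        # dark pixels -> high attention

# A mostly-dark toy image with one bright pixel.
img = np.zeros((2, 2, 3))
img[0, 0] = 1.0               # bright pixel at the top-left
att = illumination_attention(img)
```

No paired normal-light image is needed anywhere in this computation, which is the point of the self-regularized design.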
Brain midline shift (MLS) is one of the most critical factors to be considered for clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods on MLS quantification not only require intensive labeling in millimeter-level measurement but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a great number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image is different from a non-MLS image and regularization serves an important role in the sparse-to-dense refinement of the deformation field. Our experiment on a real clinical brain hemorrhage dataset has achieved state-of-the-art performance and can generate interpretable deformation fields.
Biological systems in nature have evolved for millions of years to adapt to and survive the environment. Many features they developed can be inspirational and beneficial for solving technical problems in modern industries. This leads to a specific form of design-by-analogy called bio-inspired design (BID). Although BID as a design method has been proven beneficial, the gap between biology and engineering continuously hinders designers from effectively applying the method. Therefore, we explore the recent advance of artificial intelligence (AI) for a data-driven approach to bridge the gap. This paper proposes a generative design approach based on the generative pre-trained language model (PLM) to automatically retrieve and map biological analogies and generate BID in the form of natural language. The latest generative pre-trained transformer, namely GPT-3, is used as the base PLM. Three types of design concept generators are identified and fine-tuned from the PLM according to the looseness of the problem space representation. Machine evaluators are also fine-tuned to assess the mapping relevancy between the domains within the generated BID concepts. The approach is evaluated and then employed in a real-world project of designing lightweight flying cars during its conceptual design phase. The results show that our approach can generate BID concepts with good performance.
Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, regardless of whether any objects are present. This paradigm, although effective, is inefficient because the detectors have to go through all patches, severely hindering the inference speed. This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches while achieving more efficient inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a light fully-convolutional network for judging whether each patch contains objects or not, which can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate our OAN with five advanced detectors. Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets, with consistent accuracy improvements. On extremely large Gaofen-2 images (29200$\times$27620 pixels), our OAN improves the detection speed by 70.5%. Moreover, we extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing the accuracy. Code is available at https://github.com/Ranchosky/OAN.
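The patch-gating idea behind OAN can be sketched in a few lines (a hypothetical illustration, not the authors' implementation): a cheap objectness score decides which patches the expensive detector must process, and empty patches are skipped entirely. Here the learned fully-convolutional scorer is replaced by a trivial variance-based stand-in just to show the control flow.

```python
# Toy sketch of objectness-based patch filtering. The scorer below is a
# hand-crafted stand-in; OAN learns its scorer jointly with the detector.
import numpy as np

def split_into_patches(image, patch):
    """Split an HxW array into non-overlapping patch x patch tiles,
    returning (top-left coordinate, tile) pairs."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(((y, x), image[y:y + patch, x:x + patch]))
    return tiles

def objectness_score(tile):
    """Stand-in for the light fully-convolutional scorer: normalized
    intensity variance, so uniform (empty) tiles score near zero."""
    return float(tile.var() / (tile.mean() ** 2 + 1e-8))

def select_patches(image, patch=4, threshold=0.05):
    """Coordinates of patches the expensive detector should still see."""
    return [pos for pos, tile in split_into_patches(image, patch)
            if objectness_score(tile) > threshold]

# A mostly-empty "aerial image" with a single textured region: only one
# of the four patches survives filtering.
img = np.zeros((8, 8))
img[0:4, 0:4] = np.arange(16).reshape(4, 4)
kept = select_patches(img, patch=4)   # -> [(0, 0)]
```

With most patches rejected up front, the detector's cost scales with the number of occupied patches rather than the image area, which is where the reported speed-ups come from.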
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing intra-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data, leading to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on downstream data results in better test accuracy on the given task. The above results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to a more efficient and principled fine-tuning method on downstream tasks that we demonstrate through extensive experimental results.